Introdução

Esse notebook traz as analises pedidas na disciplina Biologia Evolutiva - Bio507

Neste tutorial vamos utilizar a a linguagem Python 2.7 e os pacotes numpy, pandas, dendropy e matplotlib

Leitura de dados


In [2]:
import numpy as np
import pandas as pd
import dendropy  as dp
import matplotlib as mpl

Primeiro vamos ler os dados usando o pandas. A função read_csv cria um data.frame com os dados.


In [3]:
import numpy as np
import pandas as pd
import dendropy  as dp
import matplotlib as mpl

dados_brutos = pd.read_csv("./dados.csv")
num_traits = 4
traits = dados_brutos.columns[:num_traits]
dados_brutos.head(10)


Out[3]:
UMERO ULNA FEMUR TIBIA ESPECIE
0 3.214619 8.036072 13.517787 19.011308 A
1 5.222875 10.838375 15.547144 21.017178 A
2 5.193021 11.911770 17.247190 22.431903 A
3 6.547400 11.293085 14.890399 20.177057 A
4 4.724383 9.897135 14.811035 19.597533 A
5 4.630147 9.441174 15.247539 19.259374 A
6 6.771560 11.905400 15.571640 22.067775 A
7 5.229635 9.881575 13.987716 19.058813 A
8 4.786389 10.020607 15.676585 19.628285 A
9 4.171654 8.998204 15.523059 20.066215 A

Podemos usar a função describe para ter um resumo global, olhando para médias, desvio padrão e quartis.


In [4]:
dados_brutos.describe()


Out[4]:
UMERO ULNA FEMUR TIBIA
count 300.000000 300.000000 300.000000 300.000000
mean 8.329141 16.710483 22.099096 29.439649
std 2.290569 4.370596 4.484063 5.912966
min 2.720256 7.145574 13.307339 18.199906
25% 6.752338 14.004285 19.904228 26.598481
50% 8.577857 16.742268 21.910598 28.775488
75% 10.197404 20.604040 26.021909 34.724381
max 13.067744 24.320296 30.659812 39.132379

Podemos também usar a função groupby para aplicar funções em sub conjuntos dos dados, agrupando pela coluna ESPECIE


In [5]:
dados_brutos.groupby('ESPECIE').describe()


Out[5]:
FEMUR TIBIA ULNA UMERO
ESPECIE
A count 60.000000 60.000000 60.000000 60.000000
mean 15.111193 20.179938 9.949203 4.948718
std 1.001978 1.033350 1.042259 0.925434
min 13.307339 18.199906 7.145574 2.720256
25% 14.418396 19.531194 9.439593 4.460753
50% 15.047091 20.018262 9.961670 4.951055
75% 15.583092 20.841174 10.626844 5.450730
max 17.963502 22.431903 11.990388 6.771560
B count 60.000000 60.000000 60.000000 60.000000
mean 25.583995 34.039911 17.043157 8.449116
std 0.959801 1.026290 0.976311 1.043582
min 23.771423 31.814285 15.259521 5.956538
25% 24.942347 33.298108 16.489761 7.780216
50% 25.467266 34.001643 16.742268 8.407886
75% 26.215958 34.762250 17.629024 9.257750
max 27.800655 36.145970 19.498368 10.353900
C count 60.000000 60.000000 60.000000 60.000000
mean 21.545636 28.411278 20.234410 10.086848
std 1.093285 0.954835 0.967816 0.992821
min 19.100210 26.204498 18.536947 7.915180
25% 20.828731 27.711861 19.513071 9.368775
50% 21.689081 28.312005 20.082332 10.029947
75% 22.222772 29.228187 20.898986 10.751811
max 24.249630 30.351911 22.437891 12.299911
D count 60.000000 60.000000 60.000000 60.000000
mean 27.701677 37.039024 21.917249 10.764505
std 1.092377 0.954970 0.999072 0.997346
min 25.050549 34.571030 19.323098 8.294274
25% 27.085591 36.360565 21.359714 10.082454
50% 27.638454 37.123435 21.845116 10.749358
75% 28.133082 37.622279 22.445897 11.335582
max 30.659812 39.132379 24.320296 13.067744
E count 60.000000 60.000000 60.000000 60.000000
mean 20.552979 27.528095 14.408397 7.396516
std 0.954628 1.002057 0.800716 0.984802
min 17.708506 25.826498 12.339996 5.426811
25% 20.119694 26.641851 14.004285 6.797075
50% 20.474181 27.393652 14.384812 7.340141
75% 21.161520 28.284489 14.957284 7.970026
max 22.462786 29.440323 16.104371 10.177678

Vamos calcular as médias, desvios padrão e coeficiantes de variação por caractere por espécie.


In [6]:
medias = dados_brutos.groupby('ESPECIE').mean()
medias


Out[6]:
UMERO ULNA FEMUR TIBIA
ESPECIE
A 4.948718 9.949203 15.111193 20.179938
B 8.449116 17.043157 25.583995 34.039911
C 10.086848 20.234410 21.545636 28.411278
D 10.764505 21.917249 27.701677 37.039024
E 7.396516 14.408397 20.552979 27.528095

In [7]:
std = dados_brutos.groupby('ESPECIE').std()
std


Out[7]:
UMERO ULNA FEMUR TIBIA
ESPECIE
A 0.925434 1.042259 1.001978 1.033350
B 1.043582 0.976311 0.959801 1.026290
C 0.992821 0.967816 1.093285 0.954835
D 0.997346 0.999072 1.092377 0.954970
E 0.984802 0.800716 0.954628 1.002057

In [8]:
cv = std/medias
cv


Out[8]:
UMERO ULNA FEMUR TIBIA
ESPECIE
A 0.187005 0.104758 0.066307 0.051207
B 0.123514 0.057285 0.037516 0.030150
C 0.098427 0.047830 0.050743 0.033608
D 0.092651 0.045584 0.039434 0.025783
E 0.133144 0.055573 0.046447 0.036401

Visualizando os dados

Vamos usar o pacote matplotlib para fazer gráficos diagnósticos dos dados.


In [9]:
# histogramas

In [10]:
# scatter plots

In [11]:
# boxplots

Calculando matrizes de covariância

Vamos calcular as matrizes de covariância e correlação por espécie, além das médias, e agrupá-las num dicionário.


In [12]:
cov_matrices = dados_brutos.groupby('ESPECIE').apply(lambda x: x.cov())
cor_matrices = dados_brutos.groupby('ESPECIE').apply(lambda x: x.corr())
especies_labels = list(pd.unique(dados_brutos['ESPECIE']))

In [13]:
cov_matrices


Out[13]:
UMERO ULNA FEMUR TIBIA
ESPECIE
A UMERO 0.856428 0.775719 0.216278 0.275482
ULNA 0.775719 1.086305 0.317295 0.357705
FEMUR 0.216278 0.317295 1.003959 0.796279
TIBIA 0.275482 0.357705 0.796279 1.067811
B UMERO 1.089062 0.441458 0.173216 0.197140
ULNA 0.441458 0.953183 0.037394 0.137664
FEMUR 0.173216 0.037394 0.921217 0.679280
TIBIA 0.197140 0.137664 0.679280 1.053271
C UMERO 0.985693 0.757909 0.016274 0.328934
ULNA 0.757909 0.936668 0.085370 0.336382
FEMUR 0.016274 0.085370 1.195272 0.173524
TIBIA 0.328934 0.336382 0.173524 0.911711
D UMERO 0.994699 0.776167 0.562616 0.558410
ULNA 0.776167 0.998145 0.790311 0.636777
FEMUR 0.562616 0.790311 1.193287 0.605724
TIBIA 0.558410 0.636777 0.605724 0.911968
E UMERO 0.969835 -0.096421 0.020960 0.360699
ULNA -0.096421 0.641145 0.010365 0.127960
FEMUR 0.020960 0.010365 0.911314 0.248547
TIBIA 0.360699 0.127960 0.248547 1.004119

In [14]:
cor_matrices


Out[14]:
UMERO ULNA FEMUR TIBIA
ESPECIE
A UMERO 1.000000 0.804235 0.233244 0.288072
ULNA 0.804235 1.000000 0.303829 0.332125
FEMUR 0.233244 0.303829 1.000000 0.769060
TIBIA 0.288072 0.332125 0.769060 1.000000
B UMERO 1.000000 0.433286 0.172934 0.184068
ULNA 0.433286 1.000000 0.039905 0.137392
FEMUR 0.172934 0.039905 1.000000 0.689601
TIBIA 0.184068 0.137392 0.689601 1.000000
C UMERO 1.000000 0.788776 0.014993 0.346984
ULNA 0.788776 1.000000 0.080683 0.364008
FEMUR 0.014993 0.080683 1.000000 0.166225
TIBIA 0.346984 0.364008 0.166225 1.000000
D UMERO 1.000000 0.778955 0.516409 0.586296
ULNA 0.778955 1.000000 0.724150 0.667422
FEMUR 0.516409 0.724150 1.000000 0.580647
TIBIA 0.586296 0.667422 0.580647 1.000000
E UMERO 1.000000 -0.122277 0.022295 0.365514
ULNA -0.122277 1.000000 0.013559 0.159479
FEMUR 0.022295 0.013559 1.000000 0.259826
TIBIA 0.365514 0.159479 0.259826 1.000000

Podemos também acessar as matrizes pelo nome da espécie:


In [15]:
cor_matrices.T['C']


Out[15]:
UMERO ULNA FEMUR TIBIA
UMERO 1.000000 0.788776 0.014993 0.346984
ULNA 0.788776 1.000000 0.080683 0.364008
FEMUR 0.014993 0.080683 1.000000 0.166225
TIBIA 0.346984 0.364008 0.166225 1.000000

Ou fazer um heatplot de todas elas


In [16]:
#heat plots

Analise de variância

Calculando as matrizes dentro de grupos, entre grupos e total


In [17]:
# Matrizes dentro e entre grupos

Comparando as matrizes


In [18]:
# RandomSkewers

Filogenia

Vamos incluir uma filogenia e calcular estados ancestrais


In [21]:
tree = dp.Tree.get_from_string("(E, ((C, B)4,(A,D)3)2)1;", "newick")
tree.print_plot(display_width = 50, show_internal_node_labels = True, leaf_spacing_factor = 4)


/----------------------------------------------- E
|                                                 
|                                                 
|                                                 
|                               /--------------- C
1                               |                 
|               /---------------4                 
|               |               |                 
|               |               \--------------- B
|               |                                 
\---------------2                                 
                |                                 
                |               /--------------- A
                |               |                 
                \---------------3                 
                                |                 
                                \--------------- D
                                                  
                                                  
                                                  
                                                  

Esse código usa as matrizes e médias calculadas anteriormente, junto com os tamanhos amostrais, para calcular valores ponderados para todos os nós da filogenia.

A conta realizada para médias e matrizes é uma simples média ponderada.


In [22]:
get_node_name = lambda n: str(n.label or n.taxon or None)
nodes = [get_node_name(n) for n in tree.nodes()]

node_matrices = {}
node_sample_size = {}
for sp in especies_labels:
    new_matrix = np.array(cov_matrices.T[sp])
    node_matrices[sp] = new_matrix
    node_sample_size[sp] = dados_brutos[dados_brutos['ESPECIE'] == sp].shape[0]

# Tirando quem nao esta na filogenia e trocando os keys

node_means = {}

for sp in especies_labels:
    if tree.find_node_with_taxon_label(sp):
        new_key = get_node_name(tree.find_node_with_taxon_label(sp))
        node_means[new_key] = medias.T[sp]
        node_sample_size[new_key] = node_sample_size.pop(sp)
        node_matrices[new_key] = node_matrices.pop(sp)
    else:
        node_matrices.pop(sp)
        node_sample_size.pop(sp)

# Função que recebe uma lista de filhos e calcula a matriz, média e tamanho amostral pro ancestral

def ancestral_mean(child_labels):
    new_matrix = np.zeros((num_traits, num_traits))
    sample = 0
    new_mean = np.zeros(num_traits)
    for child in child_labels:
        node = get_node_name(child)
        new_matrix = new_matrix +\
            node_sample_size[node] * node_matrices[node]
        sample = sample + node_sample_size[node]
        new_mean = new_mean + node_sample_size[node] * node_means[node]
    new_matrix = new_matrix/sample
    new_mean = new_mean/sample
    return new_matrix, sample, new_mean

# Calculando as matrizes e tamanhos amostrais para todos os nós

for n in tree.postorder_node_iter():
    if get_node_name(n) not in node_matrices:
        node_matrices[get_node_name(n)], node_sample_size[get_node_name(n)], node_means[get_node_name(n)] = ancestral_mean(n.child_nodes())

Isso resulta num dicionário, chamado node_matrices, com todas as matrizes para todos os nós.


In [23]:
node_matrices


Out[23]:
{"'A'": array([[ 0.8564279 ,  0.77571855,  0.21627841,  0.27548205],
        [ 0.77571855,  1.08630478,  0.31729452,  0.35770483],
        [ 0.21627841,  0.31729452,  1.0039593 ,  0.79627907],
        [ 0.27548205,  0.35770483,  0.79627907,  1.06781131]]),
 "'B'": array([[ 1.08906246,  0.44145759,  0.17321563,  0.19714026],
        [ 0.44145759,  0.95318347,  0.03739387,  0.13766425],
        [ 0.17321563,  0.03739387,  0.92121709,  0.67928003],
        [ 0.19714026,  0.13766425,  0.67928003,  1.05327109]]),
 "'C'": array([[ 0.9856927 ,  0.75790946,  0.01627433,  0.3289341 ],
        [ 0.75790946,  0.9366678 ,  0.08537023,  0.33638186],
        [ 0.01627433,  0.08537023,  1.19527238,  0.17352372],
        [ 0.3289341 ,  0.33638186,  0.17352372,  0.91171074]]),
 "'D'": array([[ 0.99469886,  0.77616684,  0.56261621,  0.55840955],
        [ 0.77616684,  0.99814468,  0.79031092,  0.63677673],
        [ 0.56261621,  0.79031092,  1.1932873 ,  0.60572369],
        [ 0.55840955,  0.63677673,  0.60572369,  0.91196794]]),
 "'E'": array([[ 0.9698346 , -0.09642127,  0.02096006,  0.36069899],
        [-0.09642127,  0.64114545,  0.01036461,  0.12796018],
        [ 0.02096006,  0.01036461,  0.9113138 ,  0.24854724],
        [ 0.36069899,  0.12796018,  0.24854724,  1.00411854]]),
 '1': array([[ 0.97914331,  0.53096623,  0.19786893,  0.34413299],
        [ 0.53096623,  0.92308924,  0.24814683,  0.31929757],
        [ 0.19786893,  0.24814683,  1.04500997,  0.50067075],
        [ 0.34413299,  0.31929757,  0.50067075,  0.98977592]]),
 '2': array([[ 0.98147048,  0.68781311,  0.24209615,  0.33999149],
        [ 0.68781311,  0.99357518,  0.30759238,  0.36713192],
        [ 0.24209615,  0.30759238,  1.07843402,  0.56370163],
        [ 0.33999149,  0.36713192,  0.56370163,  0.98619027]]),
 '3': array([[ 0.92556338,  0.77594269,  0.38944731,  0.4169458 ],
        [ 0.77594269,  1.04222473,  0.55380272,  0.49724078],
        [ 0.38944731,  0.55380272,  1.0986233 ,  0.70100138],
        [ 0.4169458 ,  0.49724078,  0.70100138,  0.98988962]]),
 '4': array([[ 1.03737758,  0.59968352,  0.09474498,  0.26303718],
        [ 0.59968352,  0.94492564,  0.06138205,  0.23702306],
        [ 0.09474498,  0.06138205,  1.05824473,  0.42640187],
        [ 0.26303718,  0.23702306,  0.42640187,  0.98249091]])}

Um dicionário de tamanhos amostrais


In [24]:
node_sample_size


Out[24]:
{"'A'": 60,
 "'B'": 60,
 "'C'": 60,
 "'D'": 60,
 "'E'": 60,
 '1': 300,
 '2': 240,
 '3': 120,
 '4': 120}

E um dicionário de médias.


In [25]:
node_means['1']


Out[25]:
UMERO     8.329141
ULNA     16.710483
FEMUR    22.099096
TIBIA    29.439649
dtype: float64

Temos tb uma lista de todos os nós:


In [26]:
nodes


Out[26]:
['1', "'E'", '2', '4', "'C'", "'B'", '3', "'A'", "'D'"]

É interessante notar como a matriz estimada para a raiz, ponderando as matrizes ao longo da filogenia, é idêntica à matriz ponderada dentro de grupos, calculada pelos modelos lineares anteriormente:


In [27]:
w_matrix


---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-27-dccbe7669f95> in <module>()
----> 1 w_matrix

NameError: name 'w_matrix' is not defined

In [29]:
node_matrices['4']


Out[29]:
array([[ 1.03737758,  0.59968352,  0.09474498,  0.26303718],
       [ 0.59968352,  0.94492564,  0.06138205,  0.23702306],
       [ 0.09474498,  0.06138205,  1.05824473,  0.42640187],
       [ 0.26303718,  0.23702306,  0.42640187,  0.98249091]])

$\beta$ e $\Delta z$

Vamos agora estimar as mudanças evolutivas em cada ramo da filogenia, e, usando as matrizes ancestrais, calcular os gradientes de seleção estimados.

Os $\Delta z$ são apenas as diferenças nas médias de um nó com o seu acestral. Os $\beta$ são estimados com a equação de Lande:

$ \beta = G^{-1}\Delta z $


In [30]:
delta_z = {}
beta = {}
for n in tree.nodes()[1:]: #começamos do 1 para pular a raiz, que não tem ancestral
    parent = get_node_name(n.parent_node)
    branch = get_node_name(n) + '_' + parent
    delta_z[branch] = node_means[get_node_name(n)] - node_means[parent]
    beta[branch] = np.linalg.solve(node_matrices[parent], delta_z[branch])

In [31]:
delta_z


Out[31]:
{"'A'_3": UMERO   -2.907894
 ULNA    -5.984023
 FEMUR   -6.295242
 TIBIA   -8.429543
 dtype: float64, "'B'_4": UMERO   -0.818866
 ULNA    -1.595626
 FEMUR    2.019179
 TIBIA    2.814317
 dtype: float64, "'C'_4": UMERO    0.818866
 ULNA     1.595626
 FEMUR   -2.019179
 TIBIA   -2.814317
 dtype: float64, "'D'_3": UMERO    2.907894
 ULNA     5.984023
 FEMUR    6.295242
 TIBIA    8.429543
 dtype: float64, "'E'_1": UMERO   -0.932624
 ULNA    -2.302086
 FEMUR   -1.546117
 TIBIA   -1.911554
 dtype: float64, '2_1': UMERO    0.233156
 ULNA     0.575522
 FEMUR    0.386529
 TIBIA    0.477889
 dtype: float64, '3_2': UMERO   -0.705685
 ULNA    -1.352779
 FEMUR   -1.079190
 TIBIA   -1.308057
 dtype: float64, '4_2': UMERO    0.705685
 ULNA     1.352779
 FEMUR    1.079190
 TIBIA    1.308057
 dtype: float64}

In [32]:
beta


Out[32]:
{"'A'_3": array([ 5.59390224, -6.39130607,  0.72460605, -8.17447553]),
 "'B'_4": array([-0.31553215, -2.33441576,  0.79569786,  3.1667842 ]),
 "'C'_4": array([ 0.31553215,  2.33441576, -0.79569786, -3.1667842 ]),
 "'D'_3": array([-5.59390224,  6.39130607, -0.72460605,  8.17447553]),
 "'E'_1": array([ 0.90779455, -2.46842519, -0.48869488, -1.2034227 ]),
 '2_1': array([-0.22694864,  0.6171063 ,  0.12217372,  0.30085567]),
 '3_2': array([ 0.61913253, -1.38183399, -0.29887633, -0.85456547]),
 '4_2': array([-0.61913253,  1.38183399,  0.29887633,  0.85456547])}

Podemos agora calcular a correlação entre os $\beta$ e $\Delta z$


In [33]:
def vector_corr(x, y): return (np.dot(x, y)/(np.linalg.norm(x)*np.linalg.norm(y)))
corr_beta_delta_z = {}
for branch in delta_z:
    corr_beta_delta_z[branch] = vector_corr(beta[branch], delta_z[branch])

In [34]:
corr_beta_delta_z


Out[34]:
{"'A'_3": 0.58717760896264393,
 "'B'_4": 0.92344579494465984,
 "'C'_4": 0.92344579494465984,
 "'D'_3": 0.58717760896264359,
 "'E'_1": 0.76983303332560471,
 '2_1': 0.76983303332560438,
 '3_2': 0.71383549975253713,
 '4_2': 0.71383549975253691}

Podemos também calcular a relação entre a resposta evolutiva e o primeiro componente principal da matriz de covariação, a linha de menor resistência evolutiva.

Como a direção primeiro componente é arbitrária, tomamos o valor absoluto da correlação.


In [35]:
corr_pc1 = {}
for branch in delta_z:
    parent = branch.split("_")[1]
    pc1 = np.linalg.eig(node_matrices[parent])[1][:,0]
    corr_pc1[branch] = abs(vector_corr(delta_z[branch], pc1))

In [36]:
corr_pc1


Out[36]:
{"'A'_3": 0.95299530773479413,
 "'B'_4": 0.20772619269497661,
 "'C'_4": 0.20772619269497636,
 "'D'_3": 0.95299530773479413,
 "'E'_1": 0.95783416390763121,
 '2_1': 0.95783416390763121,
 '3_2': 0.97603202614854867,
 '4_2': 0.97603202614854867}

Podemos utilizar o pandas para formatar esses resultados em tabelas


In [37]:
df_betas = pd.DataFrame.from_dict(beta, orient='index')
df_betas.columns = traits
df_betas


Out[37]:
UMERO ULNA FEMUR TIBIA
'C'_4 0.315532 2.334416 -0.795698 -3.166784
3_2 0.619133 -1.381834 -0.298876 -0.854565
2_1 -0.226949 0.617106 0.122174 0.300856
4_2 -0.619133 1.381834 0.298876 0.854565
'E'_1 0.907795 -2.468425 -0.488695 -1.203423
'A'_3 5.593902 -6.391306 0.724606 -8.174476
'B'_4 -0.315532 -2.334416 0.795698 3.166784
'D'_3 -5.593902 6.391306 -0.724606 8.174476

In [38]:
df_dz = pd.DataFrame.from_dict(delta_z, orient='index')
df_dz.columns = traits
df_dz


Out[38]:
UMERO ULNA FEMUR TIBIA
'A'_3 -2.907894 -5.984023 -6.295242 -8.429543
'B'_4 -0.818866 -1.595626 2.019179 2.814317
'C'_4 0.818866 1.595626 -2.019179 -2.814317
'D'_3 2.907894 5.984023 6.295242 8.429543
'E'_1 -0.932624 -2.302086 -1.546117 -1.911554
2_1 0.233156 0.575522 0.386529 0.477889
3_2 -0.705685 -1.352779 -1.079190 -1.308057
4_2 0.705685 1.352779 1.079190 1.308057

In [39]:
traits_c = list(traits)
traits_c.append('otu')
df_matrices = pd.DataFrame(columns=traits_c)
for node in node_matrices:
    df = pd.DataFrame(node_matrices[node], columns=traits, index = traits)
    df['otu'] = node
    df_matrices = df_matrices.append(df)
df_matrices


Out[39]:
UMERO ULNA FEMUR TIBIA otu
UMERO 1.089062 0.441458 0.173216 0.197140 'B'
ULNA 0.441458 0.953183 0.037394 0.137664 'B'
FEMUR 0.173216 0.037394 0.921217 0.679280 'B'
TIBIA 0.197140 0.137664 0.679280 1.053271 'B'
UMERO 0.985693 0.757909 0.016274 0.328934 'C'
ULNA 0.757909 0.936668 0.085370 0.336382 'C'
FEMUR 0.016274 0.085370 1.195272 0.173524 'C'
TIBIA 0.328934 0.336382 0.173524 0.911711 'C'
UMERO 0.979143 0.530966 0.197869 0.344133 1
ULNA 0.530966 0.923089 0.248147 0.319298 1
FEMUR 0.197869 0.248147 1.045010 0.500671 1
TIBIA 0.344133 0.319298 0.500671 0.989776 1
UMERO 0.925563 0.775943 0.389447 0.416946 3
ULNA 0.775943 1.042225 0.553803 0.497241 3
FEMUR 0.389447 0.553803 1.098623 0.701001 3
TIBIA 0.416946 0.497241 0.701001 0.989890 3
UMERO 0.981470 0.687813 0.242096 0.339991 2
ULNA 0.687813 0.993575 0.307592 0.367132 2
FEMUR 0.242096 0.307592 1.078434 0.563702 2
TIBIA 0.339991 0.367132 0.563702 0.986190 2
UMERO 0.969835 -0.096421 0.020960 0.360699 'E'
ULNA -0.096421 0.641145 0.010365 0.127960 'E'
FEMUR 0.020960 0.010365 0.911314 0.248547 'E'
TIBIA 0.360699 0.127960 0.248547 1.004119 'E'
UMERO 1.037378 0.599684 0.094745 0.263037 4
ULNA 0.599684 0.944926 0.061382 0.237023 4
FEMUR 0.094745 0.061382 1.058245 0.426402 4
TIBIA 0.263037 0.237023 0.426402 0.982491 4
UMERO 0.994699 0.776167 0.562616 0.558410 'D'
ULNA 0.776167 0.998145 0.790311 0.636777 'D'
FEMUR 0.562616 0.790311 1.193287 0.605724 'D'
TIBIA 0.558410 0.636777 0.605724 0.911968 'D'
UMERO 0.856428 0.775719 0.216278 0.275482 'A'
ULNA 0.775719 1.086305 0.317295 0.357705 'A'
FEMUR 0.216278 0.317295 1.003959 0.796279 'A'
TIBIA 0.275482 0.357705 0.796279 1.067811 'A'

In [ ]: